
    An Efficient Approach for Multi-Sentence Compression

    Abstract: Multi-Sentence Compression (MSC) is of great value to many real-world applications, such as guided microblog summarization, opinion summarization, and newswire summarization. Recently, word graph-based approaches have been proposed and have become popular in MSC. Their key assumption is that redundancy among a set of related sentences provides a reliable way to generate informative and grammatical sentences. In this paper, we propose an effective approach to enhance word graph-based MSC and tackle the issue that most state-of-the-art MSC approaches face: improving both informativity and grammaticality at the same time. Our approach consists of three main components: (1) a merging method based on Multiword Expressions (MWE); (2) a mapping strategy based on synonymy between words; and (3) a re-ranking step, using a POS-based language model (POS-LM), to identify the best compression candidates. We demonstrate the effectiveness of this novel approach on a dataset made up of clusters of English newswire sentences. The observed improvements in informativity and grammaticality of the generated compressions show an error reduction of up to 44% over state-of-the-art MSC systems.
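    The abstract above centers on the word-graph idea: redundant sentences are merged into a single graph, and compressions are read off as short paths. Below is a minimal, hedged Python sketch of that core construction, with simple count-based edge weights and a length-normalized path cost. It deliberately omits the paper's actual contributions (MWE-based merging, synonym mapping, and POS-LM re-ranking), so the function names and weighting choices are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of word graph-based multi-sentence compression.
# Simplifications: nodes are lowercased surface words (no POS tags or
# disambiguation), edge weights are raw bigram counts, and candidates
# are ranked only by length-normalized path cost.
import itertools
import networkx as nx

def build_word_graph(sentences):
    """Merge related sentences into one graph: identical lowercased
    words map to the same node; each sentence is a START->END path."""
    g = nx.DiGraph()
    for sent in sentences:
        tokens = ["<START>"] + sent.lower().split() + ["<END>"]
        for a, b in zip(tokens, tokens[1:]):
            if g.has_edge(a, b):
                g[a][b]["weight"] += 1.0
            else:
                g.add_edge(a, b, weight=1.0)
    # Convert counts to costs: frequent transitions become cheap edges.
    for a, b, data in g.edges(data=True):
        data["cost"] = 1.0 / data["weight"]
    return g

def compress(sentences, k=5, min_len=6):
    """Return up to k compression candidates, shortest-path first."""
    g = build_word_graph(sentences)
    paths = nx.shortest_simple_paths(g, "<START>", "<END>", weight="cost")
    candidates = []
    for path in itertools.islice(paths, 50):
        words = path[1:-1]
        if len(words) >= min_len:
            cost = sum(g[a][b]["cost"] for a, b in zip(path, path[1:]))
            # Normalize by length so longer paths are not unfairly penalized.
            candidates.append((cost / len(words), " ".join(words)))
    return [c for _, c in sorted(candidates)[:k]]

if __name__ == "__main__":
    cluster = [
        "the prime minister met the president in paris on monday",
        "on monday the president met the prime minister",
        "the president and the prime minister met in paris",
    ]
    for c in compress(cluster):
        print(c)
```

    On the toy cluster above, the cheapest normalized paths tend to follow the shared backbone of the input sentences, which is exactly the redundancy signal the approach relies on.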

    Predictive models for cochlear implant outcomes : performance, generalizability, and the impact of cohort size

    While cochlear implants have helped hundreds of thousands of individuals, it remains difficult to predict the extent to which an individual's hearing will benefit from implantation. Several publications indicate that machine learning may improve the predictive accuracy of cochlear implant outcomes compared to classical statistical methods. However, existing studies are limited in terms of model validation and evaluation of the impact of factors like sample size on predictive performance. We conduct a thorough examination of machine learning approaches to predict word recognition scores (WRS) measured approximately 12 months after implantation in adults with post-lingual hearing loss. This is the largest retrospective study of cochlear implant outcomes to date, evaluating 2,489 cochlear implant recipients from three clinics. We demonstrate that while machine learning models significantly outperform linear models in the prediction of WRS, their overall accuracy remains limited (mean absolute error: 17.9-21.8). The models are robust across clinical cohorts, with predictive error increasing by at most 16% when evaluated on a clinic excluded from the training set. We show that predictive performance is unlikely to be improved by increasing sample size alone, with a doubling of sample size estimated to increase performance by only 3% on the combined dataset. Finally, we demonstrate how the current models could support clinical decision making, highlighting that subsets of individuals can be identified who have a 94% chance of improving WRS by at least 10 percentage points after implantation, which is likely to be clinically meaningful. We discuss several implications of this analysis, focusing on the need to improve and standardize data collection.
    http://journals.sagepub.com/home/tia
    2022
    Speech-Language Pathology and Audiology
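    To make the evaluation protocol in this abstract concrete, here is a hedged Python sketch of leave-one-clinic-out validation with mean absolute error, comparing a linear model against a machine-learning model. The data, feature set, and gradient-boosting choice are synthetic placeholders and assumptions; the study's actual predictors and algorithms are not reproduced here.

```python
# Sketch of leave-one-clinic-out evaluation for WRS prediction.
# All data below are synthetic; feature names are hypothetical.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2489  # cohort size reported in the abstract
# Hypothetical predictors: age at implantation, duration of deafness,
# preoperative score; clinic labels 0..2 mimic the three cohorts.
X = np.column_stack([
    rng.normal(60, 12, n),    # age (years)
    rng.exponential(10, n),   # duration of deafness (years)
    rng.uniform(0, 40, n),    # preoperative WRS (%)
])
clinic = rng.integers(0, 3, n)
y = np.clip(30 + 0.8 * X[:, 2] - 0.5 * X[:, 1] + rng.normal(0, 18, n), 0, 100)

for model in (LinearRegression(), HistGradientBoostingRegressor(random_state=0)):
    errors = []
    for held_out in range(3):  # train on two clinics, test on the third
        train, test = clinic != held_out, clinic == held_out
        model.fit(X[train], y[train])
        errors.append(mean_absolute_error(y[test], model.predict(X[test])))
    print(type(model).__name__, "MAE per held-out clinic:",
          [round(e, 1) for e in errors])
```

    Holding out an entire clinic, rather than a random split, is what lets the study speak to cross-cohort generalizability.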

    Toward abstractive text summarization

    Automatic text summarization is the process of automatically creating a compressed version of a given text. Content reduction can be addressed by extraction or abstraction. Extractive methods select a subset of the most salient parts of the source text for inclusion in the summary. In contrast, abstractive methods build an internal semantic representation to create a more human-like summary. The majority of summarizers are designed to be extractive due to the complex nature of abstraction. This thesis moves toward abstractive text summarization and makes this task: (i) more adaptable to a wide range of applications; (ii) more dynamic to different sources and types of text; and (iii) better evaluated using semantic representations.

    To make it more adaptable, we propose a word graph-based multi-sentence compression approach for improving both the informativity and grammaticality of summaries, which shows a 44% error reduction over state-of-the-art systems. We then discuss adapting this approach to query-focused multi-document summarization, focusing on semantic similarities between the input query and source texts. This approach satisfies the query-biased relevance, information novelty, and richness criteria.

    To make this task more dynamic, we appraise the coverage of knowledge sources for the purpose of abstractive text summarization and find a decline in the performance of summarizers that rely only on specific terminologies. Our approach integrates general and domain-specific lexicons to incorporate textual semantic similarities, bridging the knowledge and language gaps in domain-specific summarizers.

    To fairly evaluate abstractive summaries, including lexical variations and paraphrasing, we propose an approach based on both lexical and semantic similarities, which correlates highly with human judgments. Furthermore, we present an approach to evaluate summaries on test sets where model summaries are not available. Our hypothesis is that comparing semantic representations of the input and summary content leads to a more accurate evaluation. We exploit the compositional capabilities of corpus-based and lexical resource-based word embeddings for predicting summary content quality. The experimental results support our proposal to use semantic representations for model-based and model-free evaluation of summaries.
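    The model-free evaluation idea in the final part of this abstract, comparing semantic representations of the input and the summary, can be sketched compactly. The Python snippet below scores a summary by the cosine similarity of averaged word embeddings; the random vectors are placeholders for the corpus-based or lexical resource-based embeddings the thesis exploits, so the scores here are illustrative only.

```python
# Sketch of model-free summary evaluation via compositional embeddings:
# embed source and summary, then score by centroid cosine similarity.
import numpy as np

rng = np.random.default_rng(42)
_vocab = {}

def embed(word, dim=50):
    """Placeholder lookup: a real evaluator would load trained vectors."""
    if word not in _vocab:
        _vocab[word] = rng.normal(size=dim)
    return _vocab[word]

def centroid(text):
    """Compose a text representation as the mean of its word vectors."""
    vectors = [embed(w) for w in text.lower().split()]
    return np.mean(vectors, axis=0)

def content_score(source, summary):
    """Cosine similarity between source and summary centroids."""
    a, b = centroid(source), centroid(summary)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

source = "automatic text summarization creates a compressed version of a text"
summary = "summarization compresses a text automatically"
print(round(content_score(source, summary), 3))
```

    Because the score needs only the input and the candidate summary, it applies even when no reference (model) summaries exist, which is the model-free setting the thesis targets.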